16 research outputs found

    Estimation of Higher-order Regression via. Sparse Representation Model for Single Image Super-resolution Algorithm

    Super-resolution algorithms generate high-resolution (HR) imagery from single or multiple low-resolution (LR) degraded images. In this paper, an efficient single image super-resolution (SR) algorithm using higher-order regression is proposed. Image patches extracted from the HR image have self-similar example patches near their corresponding locations in the LR image. A higher-order regression function is learned from these self-similar example patches via a sparse representation model. The regression function is based on local approximations and is therefore estimated from the localized image patches. A Taylor series is used as the local approximation of the regression function, so the zeroth-order regression coefficient yields the local estimate of the regression function and the higher-order regression coefficients provide local estimates of its higher-order derivatives. The learned higher-order regression mapping function is applied to LR image patches to approximate their corresponding HR versions. The proposed super-resolution approach is evaluated on standard test images and compared against state-of-the-art SR algorithms. It is observed that the proposed technique preserves sharp high-frequency (HF) details and reconstructs visually appealing HR images without introducing any artifacts.
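    As a rough sketch of the local approximation described above (the symbols r, x and the β coefficients are assumed notation, not taken from the paper), a second-order Taylor expansion of the regression function around a patch location x, fitted to the self-similar example patches, would read:

    ```latex
    r(x_i) \;\approx\; r(x) + \nabla r(x)^{\top}(x_i - x)
           + \tfrac{1}{2}\,(x_i - x)^{\top}\,\nabla^{2} r(x)\,(x_i - x)
         \;=\; \beta_0 + \beta_1^{\top}(x_i - x) + (x_i - x)^{\top}\beta_2\,(x_i - x)
    ```

    Here β0 is the local estimate of the regression function itself, while β1 and β2 are local estimates of its first- and second-order derivatives; the sparse representation model described in the abstract supplies the prior under which these coefficients are learned from the example patches.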

    An Example-Based Super-Resolution Algorithm for Selfie Images

    A selfie is typically a self-portrait captured using the front camera of a smartphone. Most state-of-the-art smartphones are equipped with a high-resolution (HR) rear camera and a low-resolution (LR) front camera. As selfies are captured by the front camera with limited pixel resolution, fine details are largely missing from them. This paper aims to improve the resolution of selfies by exploiting the fine details in HR images captured by the rear camera, using an example-based super-resolution (SR) algorithm. HR images captured by the rear camera carry significant fine detail and are used as exemplars to train an optimal matrix-value regression (MVR) operator. The MVR operator serves as an image-pair prior that learns the correspondence between LR-HR patch pairs and is used to super-resolve LR selfie images. The proposed MVR algorithm avoids vectorization of image patch pairs and preserves image-level information during both the learning and recovery processes. The proposed algorithm is evaluated for efficiency and effectiveness, both qualitatively and quantitatively, against other state-of-the-art SR algorithms. The results validate that the proposed algorithm is efficient, requiring less than 3 seconds to super-resolve an LR selfie, and effective, preserving sharp details without introducing any counterfeit fine details.
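    A minimal sketch of the patch-level idea, assuming the LR patches have been bicubic-upsampled to the HR patch size so a single square operator can refine them; the operator, its closed-form ridge solution and the function names below are illustrative assumptions, not the paper's actual MVR formulation:

    ```python
    import numpy as np

    def learn_mvr_operator(lr_patches, hr_patches, lam=1e-3):
        """Fit W so that H_i ~= W @ L_i for matrix-shaped patch pairs via
        ridge-regularised least squares, without vectorising the patches."""
        L = np.hstack(lr_patches)   # (p, n*q): LR patch matrices stacked column-wise
        H = np.hstack(hr_patches)   # (p, n*q): corresponding HR patch matrices
        p = L.shape[0]
        # Closed-form ridge solution: W = H L^T (L L^T + lam I)^(-1)
        return H @ L.T @ np.linalg.inv(L @ L.T + lam * np.eye(p))

    def super_resolve_patch(lr_patch, W):
        # Apply the learned operator directly to the matrix-shaped LR patch.
        return W @ lr_patch
    ```

    Stacking the patches column-wise, rather than flattening each one into a long vector, mirrors the abstract's point about preserving image-level structure during learning and recovery.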

    Using machine learning for intelligent shard sizing on the cloud

    Sharding implementations use conservative approximations to determine the number of cloud instances required and the size of the shards to be stored on each of them. Conservative approximations are often inaccurate and result in overloaded deployments, which then need reactive refinement. Reactive refinement demands additional resources from an already overloaded system and is counterproductive. This paper proposes an algorithm that eliminates the need for conservative approximations and reduces the need for reactive refinement. A machine learning algorithm based on multiple linear regression is used to predict the latency of requests for a given application deployed on a cloud machine. The predicted latency makes it possible to decide accurately, and with certainty, whether the capacity of the cloud machine will satisfy the service level agreement for effective operation of the application. Applying the proposed method to a popular database schema on the cloud resulted in highly accurate predictions. The results of the deployment and of the tests performed to establish the accuracy are presented in detail and substantiate these claims.
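    A minimal sketch of the latency-prediction step, assuming a small hypothetical feature set (shard size, request rate, cache hit ratio), made-up training measurements and an assumed 50 ms SLA; none of these names or values come from the paper:

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Hypothetical load-test measurements: shard size (GB), requests/sec,
    # cache hit ratio -> observed request latency (ms).
    X_train = np.array([
        [ 50, 200, 0.90],
        [ 50, 600, 0.80],
        [100, 400, 0.85],
        [150, 600, 0.80],
        [200, 500, 0.75],
        [200, 800, 0.70],
    ])
    y_train = np.array([12.0, 30.0, 25.0, 41.0, 55.0, 78.0])

    model = LinearRegression().fit(X_train, y_train)

    SLA_LATENCY_MS = 50.0   # assumed service level agreement

    def shard_fits(shard_size_gb, req_per_sec, hit_ratio):
        """Accept a candidate shard size only if the predicted latency
        stays within the SLA, instead of using a conservative rule of thumb."""
        predicted = model.predict([[shard_size_gb, req_per_sec, hit_ratio]])[0]
        return predicted <= SLA_LATENCY_MS

    # Example: grow the shard until the prediction says the SLA would be violated.
    size = 50
    while shard_fits(size, 600, 0.80):
        size += 10
    print(f"largest shard size within SLA: {size - 10} GB")
    ```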

    Efficient read monotonic data aggregation across shards on the cloud

    Client-centric consistency models define the view of the data storage a client expects in relation to the operations performed by that client within a session. Monotonic reads is a client-centric consistency model which ensures that once a process has seen a particular value of an object, subsequent accesses will never return earlier values. Monotonic reads are used in applications such as news feeds and social networks to ensure that the user always has a forward-moving view of the data. Monotonic reads over multiple copies of the data in lightly loaded systems are intuitive and easy to implement. For example, ensuring that a client session always fetches data from the same server automatically guarantees that the user will never see old data. However, such a simplistic setup will not work for large deployments on the cloud, where the data is sharded across multiple high-availability setups and several million clients access the data at the same time. In such a setup it becomes necessary to ensure that the data fetched from multiple shards is logically consistent. Trivial implementations, such as sticky sessions, cause severe performance degradation during peak loads. This paper explores the challenges surrounding consistent monotonic reads over a sharded setup on the cloud and proposes an efficient architecture for them. The performance of the proposed architecture is measured by implementing it on a cloud setup and measuring response times for different shard counts. We show that the proposed solution scales with almost no change in performance as the number of shards increases.
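    One way such an architecture could enforce monotonic reads without sticky sessions is for the client session to track the highest version it has seen per shard and skip replicas that lag behind; the sketch below assumes a hypothetical replica API returning (value, version) pairs and is not the paper's actual design:

    ```python
    class MonotonicSession:
        """Session-scoped read path that never lets the client's view of any
        shard move backwards (illustrative sketch, assumed API)."""

        def __init__(self, shards):
            self.shards = shards      # shard_id -> list of replica clients
            self.last_seen = {}       # shard_id -> highest version read so far

        def read(self, shard_id, key):
            floor = self.last_seen.get(shard_id, 0)
            for replica in self.shards[shard_id]:
                value, version = replica.get(key)   # assumed to return (value, version)
                if version >= floor:
                    # Accept only reads at least as new as anything seen before.
                    self.last_seen[shard_id] = version
                    return value
            raise RuntimeError("no replica has caught up to the session's view yet")
    ```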

    Localization-affordability-saturation for speedy distribution of solar study lamps to millions

    Millions of rural households in India lack access to clean basic lighting, which hampers the ability of children to study after dark. Over 81 million students are likely to be dependent on kerosene lamps for basic lighting. Despite programmes focused on disseminating solar lanterns over the last three decades, penetration remains limited owing to three main barriers: affordability, absence of after-sales service and unavailability in rural markets. This paper presents the localization-affordability-saturation (LAS) model, which addresses these barriers by bringing together three institutional spheres (government, corporate and NGO) for speedy upscaling. The paper illustrates experiences and challenges that emerged from field testing of the model through the Million Solar Urja Lamp Programme, which disseminated 1,000,000 solar study lamps to rural school children residing in more than 10,000 villages in India. Though implemented successfully, the applicability of the LAS model can be hindered by dependence on subsidies, bottlenecks in local manufacturing infrastructure, the absence of a market mechanism and its limited relevance in providing complete lighting solutions.

    Metal-related castability effects in aluminium foundry alloys
